Conversation
Should we add the following features?
etc...
Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ff46ca8033
@wtomin @SamitHuang @ZJY0516 PTAL, thanks!
Pull request overview
This PR introduces a comprehensive ComfyUI integration for vLLM-Omni, enabling visual workflow-based inference for multimodal AI models through ComfyUI's node system. The integration provides nodes for image generation, multimodal comprehension, and text-to-speech tasks, supporting both single-stage and multi-stage model pipelines with configurable sampling parameters.
Changes:
- Adds ComfyUI custom nodes for vLLM-Omni online serving API
- Implements API client with support for image generation, editing, comprehension, and TTS
- Provides sampling parameter nodes for autoregression and diffusion stages
- Includes documentation, example workflows, and CI/CD workflows for publishing to ComfyUI registry
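
To make the API-client role above concrete, here is a minimal illustrative sketch of an OpenAI-style image-generation client. The endpoint path, payload fields, and response shape are assumptions based on the OpenAI Images API convention, not the actual `apps/ComfyUI-vLLM-Omni/vllm_omni/utils/api_client.py` implementation (which is async):

```python
import base64
import json
import urllib.request


def build_payload(prompt: str, model: str) -> dict:
    """Assemble an OpenAI-style image generation request body (illustrative)."""
    return {"model": model, "prompt": prompt, "response_format": "b64_json"}


def decode_image(body: dict) -> bytes:
    """Extract the first base64-encoded image from an OpenAI-style response."""
    return base64.b64decode(body["data"][0]["b64_json"])


def generate_image(base_url: str, prompt: str, model: str) -> bytes:
    """POST a prompt to the server and return raw image bytes."""
    req = urllib.request.Request(
        f"{base_url}/v1/images/generations",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return decode_image(json.load(resp))
```

The real client presumably also handles image editing, comprehension, and TTS endpoints; this sketch only shows the request/response plumbing for one route.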
Reviewed changes
Copilot reviewed 30 out of 36 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| `apps/ComfyUI-vLLM-Omni/__init__.py` | Plugin entry point defining node mappings and display names |
| `apps/ComfyUI-vLLM-Omni/vllm_omni/nodes.py` | Core node implementations for generation and sampling parameters |
| `apps/ComfyUI-vLLM-Omni/vllm_omni/utils/api_client.py` | Async HTTP client for vLLM-Omni API endpoints |
| `apps/ComfyUI-vLLM-Omni/vllm_omni/utils/format.py` | Format conversion utilities for images, video, and audio |
| `apps/ComfyUI-vLLM-Omni/vllm_omni/utils/validators.py` | Validation logic for model specs and sampling parameters |
| `apps/ComfyUI-vLLM-Omni/vllm_omni/utils/models.py` | Model pipeline specifications and payload preprocessors |
| `apps/ComfyUI-vLLM-Omni/vllm_omni/utils/logger.py` | Logging configuration with base64 redaction |
| `apps/ComfyUI-vLLM-Omni/vllm_omni/utils/types.py` | Type definitions for audio formats and model specifications |
| `apps/ComfyUI-vLLM-Omni/web/main.js` | Frontend extension (mostly commented out) |
| `apps/ComfyUI-vLLM-Omni/web/utils.js` | Multiline text widget utilities |
| `apps/ComfyUI-vLLM-Omni/pyproject.toml` | Package configuration and metadata |
| `apps/ComfyUI-vLLM-Omni/README.md` | User-facing documentation and quickstart guide |
| `apps/ComfyUI-vLLM-Omni/LICENSE` | Apache 2.0 license |
| `tests/comfyui/test_example.py` | Basic smoke test for node instantiation |
| `tests/comfyui/conftest.py` | Test configuration for path setup |
| `.github/workflows/comfyui-validate.yml` | CI workflow for backward compatibility validation |
| `.github/workflows/comfyui-publish.yml` | CI workflow for publishing to ComfyUI registry |
| `.github/workflows/build_wheel.yml` | Updated to exclude apps directory from build triggers |
| `docs/features/comfyui.md` | Feature documentation for the integration |
| `docs/.nav.yml` | Added ComfyUI to documentation navigation |
| `.gitignore` | Allows example workflow JSON files |
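
The plugin entry point listed above follows ComfyUI's custom-node convention of exposing module-level `NODE_CLASS_MAPPINGS` and `NODE_DISPLAY_NAME_MAPPINGS`. A minimal sketch of that pattern is below; the node name, category, and generate behavior are invented for illustration, not the actual vLLM-Omni nodes:

```python
class OmniGenerateNode:
    """Minimal ComfyUI-style node: class attributes describe the UI."""

    CATEGORY = "vLLM-Omni"          # where the node appears in the menu
    RETURN_TYPES = ("IMAGE",)       # output socket types
    FUNCTION = "generate"           # method ComfyUI calls to execute the node

    @classmethod
    def INPUT_TYPES(cls):
        # Declares input widgets/sockets; here a single multiline prompt.
        return {"required": {"prompt": ("STRING", {"multiline": True})}}

    def generate(self, prompt):
        # A real node would call the vLLM-Omni API client here.
        return (f"generated:{prompt}",)


# ComfyUI discovers a plugin's nodes through these two module-level mappings.
NODE_CLASS_MAPPINGS = {"OmniGenerate": OmniGenerateNode}
NODE_DISPLAY_NAME_MAPPINGS = {"OmniGenerate": "vLLM-Omni Generate"}
```

This also explains the shape of the smoke test in `tests/comfyui/test_example.py`: instantiating each mapped class is enough to catch import and signature errors.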
Ah yes, we had this discussion today, and the README and this PR now include a notice that connecting multiple model services is not tested. I can help test this in the future and add the relevant documentation and example workflows.
And LoRA as well!
LGTM, looking forward to the follow-up PR.
In the follow-up PR, please test the Hunyuan Image 3.0 instruct model; we are going to use this model for a demonstration and blog post.
Pull request overview
Copilot reviewed 22 out of 27 changed files in this pull request and generated 18 comments.
Purpose
Design an all-in-one ComfyUI integration for vLLM-Omni.
Closes #900 (discussion about the UI design can go there).
Draft progress
- Video generation: pending API support ([Feature] Support Wan2.2 T2V and I2V Online Serving with OpenAI /v1/videos API #1073; [Bug]: Diffusion chat completion failed: 'numpy.ndarray' object has no attribute 'save' #793)

Features I have experimented with:
(This section is also added to the plugin README.)
The following features are tested:
The following features are not currently tested. They will be tested in the future, and the READMEs will be updated accordingly.
Release Note
New ComfyUI integration under `apps/ComfyUI-vLLM-Omni`. Please check out the README in this folder for installation instructions.
Test Plan
No tests for now. Tests are difficult to add for the following reasons:
- The tests need to run `AsyncOmni` in a subprocess. This is difficult to achieve, and may be introduced in another PR later.

For now, we rely on the existing entrypoint API tests to ensure that the API doesn't change.
The tests described above are WIP in my other branch https://github.com/fhfuih/vllm-omni/tree/comfyui-test. I will create another PR when it is ready.
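
As a rough illustration of such a subprocess-based setup, the generic part is a readiness poll that waits for the spawned server to start accepting connections before the client tests run. The server command and port here are placeholders, not the actual vLLM-Omni invocation:

```python
import socket
import subprocess
import time


def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll until something accepts TCP connections on host:port, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False


def launch_server(cmd: list[str]) -> subprocess.Popen:
    """Start the server as a child process; the caller must terminate it."""
    return subprocess.Popen(cmd)
```

A pytest fixture would wrap `launch_server` plus `wait_for_port` in setup and `Popen.terminate()` in teardown, so each test session gets a live API endpoint.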
Test Result
N/A
Essential Elements of an Effective PR Description Checklist
- `supported_models.md` and `examples` for a new model. TODO: Will add later.

Screenshots: